Decision Tree
A decision tree is a supervised learning algorithm that can be used for both classification and regression problems. It is a tree-like structure in which each internal node denotes a test on an attribute, each branch represents an outcome of that test, and each leaf node holds a class label. The paths from root to leaf represent classification rules. Generally, a decision tree has high variance and low bias.
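As a minimal sketch (assuming scikit-learn is available; the iris dataset and the depth limit are just illustrative choices), the root-to-leaf rules of a fitted tree can be printed directly:

```python
# Fit a small classification tree and print its root-to-leaf rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Each indented line is a test on an attribute (internal node);
# each "class:" line is a leaf holding the predicted label.
print(export_text(tree, feature_names=load_iris().feature_names))
```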
Regression Tree
Two steps to build a regression tree:
- Divide the predictor space (the set of all possible values of the predictors $X_1, X_2, \dots, X_p$) into $J$ disjoint, non-overlapping regions $R_1, R_2, \dots, R_J$.
- For every observation that falls into region $R_j$, we make the same prediction, which is simply the mean of the response values of the training observations in that region (a toy sketch follows this list).
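Below is a toy sketch of step 2 using NumPy only (the data and the cutpoint at $x = 5$ are made up for illustration): once the regions are fixed, the prediction for any point is just the mean response of the training points in its region.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])   # single predictor
y = np.array([1.1, 0.9, 1.0, 3.2, 2.8, 3.0])   # responses

# Hypothetical partition of the predictor space into two regions at x = 5.
in_r1 = x < 5.0
pred_r1, pred_r2 = y[in_r1].mean(), y[~in_r1].mean()
print(pred_r1, pred_r2)   # ~1.0 for R1, ~3.0 for R2
```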
But how do we get those regions? Theoretically, the regions could have any shape, but for better interpretability we divide the predictor space into high-dimensional rectangles (boxes). That is, we look for boxes $R_1, \dots, R_J$ that minimize the RSS $\sum_{j=1}^{J} \sum_{i \in R_j} (y_i - \hat{y}_{R_j})^2$, where $\hat{y}_{R_j}$ is the mean of the response values of the training observations in region $R_j$. That seems to settle the question, but it is computationally infeasible to enumerate all possible partitions of the feature space into $J$ boxes. Therefore, we commonly use a top-down, greedy approach (known as recursive binary splitting) to find a good partition.
- We split the predictor space into two regions by choosing a single predictor $X_j$ and a cutpoint $s$ such that the resulting RSS is as small as possible, and then recursively apply the same procedure to each of the two resulting regions.
- i.e. we seek $j$ and $s$ minimizing $\sum_{i:\, x_i \in R_1(j,s)} (y_i - \hat{y}_{R_1})^2 + \sum_{i:\, x_i \in R_2(j,s)} (y_i - \hat{y}_{R_2})^2$, where:
- $R_1(j,s) = \{X \mid X_j < s\}$ and $R_2(j,s) = \{X \mid X_j \geq s\}$
- $\hat{y}_{R_1}$ and $\hat{y}_{R_2}$ are the averages of the responses of the training observations in each region (see the sketch of one greedy split after this list).
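The following is a rough sketch of a single greedy step under these definitions (pure NumPy with toy data; the helper names `best_split` and `rss` are made up for illustration): scan every predictor $j$ and candidate cutpoint $s$, and keep the pair whose two half-planes give the smallest total RSS.

```python
import numpy as np

def rss(values):
    # RSS of a region when predicting its mean (0 for an empty region).
    return 0.0 if len(values) == 0 else np.sum((values - values.mean()) ** 2)

def best_split(X, y):
    best = (None, None, np.inf)          # (j, s, total RSS)
    n, p = X.shape
    for j in range(p):
        for s in np.unique(X[:, j]):     # candidate cutpoints
            left = X[:, j] < s
            total = rss(y[left]) + rss(y[~left])
            if total < best[2]:
                best = (j, s, total)
    return best

X = np.array([[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]])
y = np.array([1.1, 0.9, 1.0, 3.2, 2.8, 3.0])
print(best_split(X, y))   # splits near x = 6, separating the two clusters
```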
A smaller tree with fewer splits may have lower variance and better interpretability at the cost of a little bias. However, it is hard to use cross-validation directly to choose the subtree with the lowest test error, because the number of possible subtrees is far too large to evaluate.
A common way to obtain a smaller tree is to first grow a very large tree and then prune it back. We can use cost complexity pruning, a.k.a. weakest link pruning. We consider a sequence of subtrees indexed by a nonnegative tuning parameter $\alpha$. For each value of $\alpha$ we find a subtree $T \subseteq T_0$ that minimizes the cost function $\sum_{m=1}^{|T|} \sum_{i:\, x_i \in R_m} (y_i - \hat{y}_{R_m})^2 + \alpha |T|$. Here $|T|$ indicates the number of terminal nodes in the tree $T$, $R_m$ is the region corresponding to the $m$th terminal node, and $\hat{y}_{R_m}$ is the mean response in $R_m$.
- The tuning parameter $\alpha$ controls the trade-off between the subtree's complexity and its fit to the training data: the larger $\alpha$ is, the heavier the penalty on the number of terminal nodes, so the smaller (less complex) the selected subtree will be.
- $\alpha = 0$ gives the full tree $T_0$.
- $\alpha$ is chosen by cross-validation.
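If scikit-learn is available, cost complexity pruning can be sketched as follows (the dataset and the train/test split are placeholder choices, not part of the original notes): grow a large tree, read off its sequence of effective $\alpha$ values, and refit pruned subtrees along that path.

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grow a very large tree, then compute its pruning path.
full_tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

for alpha in path.ccp_alphas[::10]:            # subsample the path for brevity
    pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=alpha)
    pruned.fit(X_train, y_train)
    print(f"alpha={alpha:.2f}  leaves={pruned.get_n_leaves()}  "
          f"R^2 on test={pruned.score(X_test, y_test):.3f}")
```

Larger values of `ccp_alpha` prune more aggressively, so the number of leaves shrinks along the path.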
That is, the algorithm to build a regression tree is:
- Use recursive binary splitting to grow a very large tree on the training data, stopping only when each terminal node has fewer than some minimum number of observations.
- Apply cost complexity pruning to the large tree in order to obtain a sequence of best subtrees, as a function of $\alpha$.
- Use $K$-fold cross-validation to choose $\alpha$. That is, divide the training observations into $K$ folds, and for each $k = 1, \dots, K$:
- Repeat steps 1 and 2 on all but the $k$th fold of the training data.
- Evaluate the mean squared prediction error on the data in the left-out $k$th fold, as a function of $\alpha$.
- Average the results over the $K$ folds, and pick $\alpha$ to minimize the average error.
- Return the subtree from step 2 that corresponds to the chosen value of $\alpha$.
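A sketch of the cross-validation step using scikit-learn (again with a placeholder dataset): run $K$-fold CV over the candidate $\alpha$ values from the pruning path and keep the one with the smallest average validation error.

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)

# Candidate alphas come from the pruning path of the full tree
# (subsampled here to keep the search small).
full_tree = DecisionTreeRegressor(random_state=0).fit(X, y)
alphas = full_tree.cost_complexity_pruning_path(X, y).ccp_alphas[::10]

search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={"ccp_alpha": alphas},
    cv=5,                                  # K = 5 folds
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)                 # chosen alpha
print(search.best_estimator_.get_n_leaves())
```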
Classification Tree
For a classification tree, we predict that each observation belongs to the most commonly occurring class among the training observations in the region to which it belongs. As with regression trees, we use recursive binary splitting, but instead of minimizing the RSS we could minimize the classification error rate. That is, for a region $R_m$ with $K$ classes of labels, let $\hat{p}_{mk} = \frac{1}{N_m} \sum_{i:\, x_i \in R_m} I(y_i = k)$, where $I(\cdot)$ is an indicator function that equals 1 if $y_i = k$ and 0 otherwise, and $N_m$ is the number of training observations in $R_m$. Then the misclassification rate is $E_m = 1 - \max_k \hat{p}_{mk}$. In practice, however, the classification error is not sufficiently sensitive to changes in the node probabilities, so we cannot use it directly to choose the splits when growing a classification tree.
First, recall that the entropy of a discrete random variable is a number that quantifies the uncertainty inherent in its possible outcomes; here that uncertainty can be understood as a measure of node purity. The entropy-based metric, called the deviance, is given by $D_m = -\sum_{k=1}^{K} \hat{p}_{mk} \log \hat{p}_{mk}$, where $\hat{p}_{mk}$ is the proportion of training observations in region $R_m$ that belong to class $k$.
- We also have a measure of total variance across the $K$ classes, called the Gini index, which is given by $G_m = \sum_{k=1}^{K} \hat{p}_{mk} (1 - \hat{p}_{mk})$.
- Both metrics measure node purity and are numerically quite similar; either can be used to grow or prune the tree (see the numeric comparison after this list).
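As a small numeric comparison, consider a hypothetical node with class proportions $\hat{p}_m = (0.7, 0.2, 0.1)$; the sketch below computes the misclassification rate, deviance, and Gini index for it.

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])               # \hat{p}_{mk} for K = 3 classes

misclass = 1 - p.max()                       # misclassification rate
deviance = -np.sum(p * np.log(p))            # entropy / deviance (natural log)
gini = np.sum(p * (1 - p))                   # Gini index

print(misclass, deviance, gini)              # 0.30, ~0.802, 0.46
```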
Pros and Cons
If we could only choose between a linear model and a tree-based approach for regression: when the true relationship between the response and the predictors is (close to) linear, the linear model performs better; when the relationship is non-linear, the tree-based approach tends to do better.
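A rough sketch of this comparison on synthetic data (the data-generating functions and model settings are arbitrary choices for illustration): a linear model wins when the truth is linear, a shallow tree wins when the truth is a step-like non-linear function.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
noise = rng.normal(scale=0.3, size=400)

y_linear = 2.0 * X[:, 0] + noise                    # linear truth
y_step = np.where(X[:, 0] > 0, 2.0, -2.0) + noise   # non-linear (step) truth

for name, y in [("linear truth", y_linear), ("step truth", y_step)]:
    for model in (LinearRegression(), DecisionTreeRegressor(max_depth=3)):
        score = cross_val_score(model, X, y, cv=5).mean()   # mean R^2 over folds
        print(name, type(model).__name__, round(score, 3))
```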
Pros:
- Easy to interpret and visualize.
- Can handle both numerical and categorical predictors.
- Can handle multi-output problems, and categorical predictors do not need to be expanded into many dummy variables.
Cons:
- Not as stable or as accurate as some other approaches (but this can be improved by bagging, random forests, boosting, etc.).